Suspected logic bug in 'get_optional_params' #10245


Open · wants to merge 3 commits into base: main

Conversation

peteski22 (Contributor) commented Apr 23, 2025

Bug: get_optional_params

PR for a potential logic bug in get_optional_params (the condition custom_llm_provider == "ollama" means the branch can only ever be reached for ollama).

There was also a case where function_call could be the only key present in non_default_params, but the code never read its value to set optional_params["functions_unsupported_model"].

```python
if (
    "functions" in non_default_params
    or "function_call" in non_default_params
    or "tools" in non_default_params
):
```
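The gap can be illustrated with a minimal sketch (function name and structure are hypothetical, simplified from the actual litellm code): the guard above mentions function_call, but the body only ever reads functions and tools.

```python
def collect_unsupported(non_default_params: dict) -> dict:
    """Hypothetical sketch of the pre-fix behaviour."""
    optional_params = {}
    if (
        "functions" in non_default_params
        or "function_call" in non_default_params
        or "tools" in non_default_params
    ):
        # Only these two keys were ever copied across...
        if "functions" in non_default_params:
            optional_params["functions_unsupported_model"] = non_default_params["functions"]
        elif "tools" in non_default_params:
            optional_params["functions_unsupported_model"] = non_default_params["tools"]
        # ...so a payload containing only function_call enters the guard
        # but falls through with nothing set.
    return optional_params
```

With only function_call supplied, the guard is entered but the result is empty, which is the case the PR adds handling for.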

The PR adjusts this behaviour so that it is the same whether or not custom_llm_provider is ollama among the openai_compatible_providers.

I had to make a judgement call (so happy to get feedback here): since the if/elif for ollama checked tools then functions, I placed function_call after those two. So the order of precedence is:

tools > functions > function_call
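That precedence can be sketched as a small helper (the name is hypothetical, not litellm's actual API):

```python
# Highest precedence first: tools > functions > function_call
PRECEDENCE = ("tools", "functions", "function_call")

def first_function_param(non_default_params: dict):
    """Return the (key, value) of the highest-precedence
    function-calling param present, or (None, None)."""
    for key in PRECEDENCE:
        if key in non_default_params:
            return key, non_default_params[key]
    return None, None
```

When all three are supplied, tools wins; function_call alone is still picked up, covering the previously missed case.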

The elif code also seemed to treat it slightly differently, and I'm not sure that was the intention.

Note:

I added __init__.py files in the tests to prevent naming collisions: there's already a test_utils.py inside a test package, but the directories weren't marked as packages.

Relevant issues

Refs: #10244

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement; see details)
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests ([make test-unit](https://docs.litellm.ai/docs/extras/contributing_code))
  • My PR's scope is as isolated as possible; it only solves 1 specific problem

Type

🐛 Bug Fix
🧹 Refactoring

Changes

  • Changes behavior of get_optional_params to remove all function related optional params when they are supported and should be added to the prompt.

Receipts

[screenshot: new tests passing locally]


vercel bot commented Apr 23, 2025

The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Comments | Updated (UTC) |
|---|---|---|---|---|
| litellm | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | May 18, 2025 8:30pm |

Only the if block could ever be reached; the elif and else couldn't, due to the logic check above it: BerriAI@4d2337e#diff-9661b0d8f1d9f48433f2323d1102e963d3098f667f7a25caea79f16fa282f6ddR5269

Extract params functions for get optional params and add tests

vercel bot commented May 6, 2025

Deployment failed with the following error:

The provided GitHub repository does not contain the requested branch or commit reference. Please ensure the repository is not empty.

@peteski22 peteski22 marked this pull request as ready for review May 6, 2025 16:08
peteski22 (Contributor, Author) commented:

Hey @krrishdholakia and @ishaan-jaff, apologies for tagging, but I opened an issue (#10244) and this PR in response, as I think there's something off in the code here.

That said, I'm not 100% sure I understand the full intent of the old code, so what I'm proposing may not be desirable and could introduce subtly different bugs. I'd love it if anyone could take a look and flag any concerns or observations.

Thanks 😄

krrishdholakia (Contributor) commented:

Hi @peteski22, appreciate the PR, but I don't follow the test cases here. Can you give me a simple script to repro the issue?

peteski22 (Contributor, Author) commented:

> Hi @peteski22, appreciate the PR, but I don't follow the test cases here. Can you give me a simple script to repro the issue?

👋🏼 Hey @krrishdholakia

I added more info and a small code sample on the issue, so it's all captured in one place.

RE: Test cases:

While trying to fix the logic errors I believe exist, I refactored some of the code and extracted parts into private methods in utils.

Methods:

  • _get_optional_params_defaults
  • _get_optional_params_non_default_params

As a result, some of the test cases exist so we can be certain we're getting the correct sets of default and non-default params from these new 'private' methods, and that their values are what we expect. If the code changes later, the tests will fail and we'll be aware.

Tests specific to each method are grouped under the relevant class in the test file:

  • TestGetOptionalParamsDefaults
  • TestGetOptionalParamsNonDefaultParams

Then there are the tests that exercise the changes to the logic within get_optional_params, grouped under TestGetOptionalParms.

These tests try to exercise the conditions required to hit the different code paths I called out in the issue.

  • One where we get an exception if the provider doesn't support function calling (a path that was never reached before).
  • One that deals with ollama being a special case.
  • One that verifies the order of precedence applied when args are supplied for all of tools, functions, and function_call.
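The precedence test could be sketched roughly like this (plain asserts; the helper name is hypothetical and stands in for the logic under test):

```python
def highest_precedence_key(params: dict):
    """Return the highest-precedence function-calling key present,
    following tools > functions > function_call (hypothetical name)."""
    for key in ("tools", "functions", "function_call"):
        if key in params:
            return key
    return None

# All three supplied: tools wins.
assert highest_precedence_key({"tools": [], "functions": [], "function_call": "auto"}) == "tools"
# functions beats function_call.
assert highest_precedence_key({"functions": [], "function_call": "auto"}) == "functions"
# function_call alone is still picked up (the originally missed case).
assert highest_precedence_key({"function_call": "auto"}) == "function_call"
```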

I hope this helps.
